Parallel computing

Parallel computing is a type of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling.〔S.V. Adve ''et al.'' (November 2008). "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."〕 As power consumption (and consequently heat generation) by computers has become a concern in recent years,〔Asanovic ''et al.'': Old wisdom: power is free, but transistors are expensive. New wisdom: power is expensive, but transistors are "free".〕 parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.〔Asanovic, Krste ''et al.'' (December 18, 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old wisdom: Increasing clock frequency is the primary method of improving processor performance. New wisdom: Increasing parallelism is the primary method of improving processor performance… Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limits."〕
Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU).〔Rob Pike, "Concurrency is not Parallelism", ''Waza'' conference, January 11, 2012 (slides, video)〕〔"Parallelism vs. Concurrency"〕 In parallel computing, a computational task is typically broken down into several, often many, very similar subtasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
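
To make the distinction concrete, here is a minimal sketch, assuming CPython and its standard threading and multiprocessing modules (the function cpu_bound and the worker counts are illustrative, not from the article): CPython's global interpreter lock lets threads of pure-Python code run concurrently by time-sharing, but not in parallel, while separate processes can run on separate cores at once.

import threading
import multiprocessing

def cpu_bound(n):
    # Purely CPU-bound work: sum of squares up to n.
    return sum(i * i for i in range(n))

def with_threads(n, workers):
    # Concurrency without parallelism (in CPython): the threads interleave,
    # because the GIL lets only one of them execute Python bytecode at a time.
    threads = [threading.Thread(target=cpu_bound, args=(n,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def with_processes(n, workers):
    # Parallelism: each process has its own interpreter and can use its own core.
    with multiprocessing.Pool(workers) as pool:
        pool.map(cpu_bound, [n] * workers)

if __name__ == "__main__":
    with_threads(200_000, 4)
    with_processes(200_000, 4)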
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks.
In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
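
As an illustration of the race conditions mentioned above, a small hypothetical Python sketch (not from the article): two versions of a shared-counter increment, one unsynchronized and one guarded by a lock.

import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(times):
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write; interleaved threads can lose updates

def safe_increment(times):
    global counter
    for _ in range(times):
        with lock:  # the lock serializes the read-modify-write, removing the race
            counter += 1

def run(worker, times=100_000, n_threads=4):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(times,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

if __name__ == "__main__":
    print("unsafe:", run(unsafe_increment))  # may be less than 400000 when updates are lost
    print("safe:  ", run(safe_increment))    # always 400000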
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law.
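
For a concrete feel for Amdahl's law, a short sketch (the parallel fractions and processor counts are illustrative): with parallel fraction p and N processors the speed-up is bounded by 1 / ((1 - p) + p/N), so a program that is 95% parallelizable can never run more than 20 times faster, no matter how many processors are used.

def amdahl_speedup(p, n):
    # Upper bound on speed-up for parallel fraction p on n processors.
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 16, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# As n grows, the speed-up approaches 1 / (1 - p) = 20 for p = 0.95.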
==Background==
Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed.
Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above.
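
A minimal sketch of this decomposition, assuming Python's standard multiprocessing module (the chunking scheme and names such as partial_sum are illustrative): the problem is split into independent parts, each part is solved by its own processing element, and the partial results are combined afterwards.

from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker sums its own, independent sub-range.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, parts=4):
    # Split [0, n) into independent chunks, solve each in its own process,
    # then combine the partial results.
    step = n // parts
    chunks = [(i * step, n if i == parts - 1 else (i + 1) * step) for i in range(parts)]
    with Pool(parts) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(range(1_000_000))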
Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction. An increase in frequency thus decreases runtime for all compute-bound programs.
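
A short worked example of this relation (the instruction count and per-instruction time are assumptions for illustration):

instructions = 2_000_000_000       # instructions executed by the program
time_per_instruction = 1e-9        # seconds per instruction at the original frequency

print(instructions * time_per_instruction)          # 2.0 s of runtime
# Raising the clock frequency by 50% cuts the average time per instruction by
# the same factor, so a compute-bound program finishes proportionally sooner.
print(instructions * time_per_instruction / 1.5)    # about 1.33 s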
However, power consumption ''P'' by a chip is given by the equation ''P'' = ''C'' × ''V''² × ''F'', where ''C'' is the capacitance being switched per clock cycle (proportional to the number of transistors whose inputs change), ''V'' is voltage, and ''F'' is the processor frequency (cycles per second). Increases in frequency increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.
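
A small sketch of the power equation with illustrative numbers (the capacitance, voltage, and frequency values are assumptions, not figures from the article); note that a higher frequency typically also demands a higher voltage, so power rises faster than the frequency increase alone would suggest.

def dynamic_power(c, v, f):
    # P = C * V^2 * F: switched capacitance times voltage squared times frequency.
    return c * v ** 2 * f

base   = dynamic_power(c=1e-9, v=1.2, f=3.0e9)   # about 4.3 W
faster = dynamic_power(c=1e-9, v=1.4, f=4.0e9)   # higher frequency usually needs higher voltage
print(base, faster, faster / base)               # power grows by ~1.8x for a ~1.3x frequency gain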
Moore's law is the empirical observation that the number of transistors in a microprocessor doubles every 18 to 24 months. Despite power consumption issues, and repeated predictions of its end, Moore's law is still in effect. With the end of frequency scaling, these additional transistors (which are no longer used for frequency scaling) can be used to add extra hardware for parallel computing.

Excerpt source: Wikipedia, the free encyclopedia.
Read the full English-language article "Parallel computing" at Wikipedia.


